2025-07-08 20:01:27
This is the quarterly links and updates post, a selection of things I’ve been reading and doing for the past few months.
Tetris was invented in 1984 and came out on the NES in 1989, but the best way to play was only discovered in 2021.1 Previously, players would just try to tap the buttons really fast (“hypertapping”), until a 15-year-old named Christopher “CheeZ” Martinez realized that you could actually press the buttons faster if you roll your fingers across the back of the controller (“rolling”). CheeZ went on to set world records using his technique, but he wasn’t on top for long. Other players soon perfected their rolls, and CheeZ lost in a first-round upset at the 2022 Classic Tetris World Championship to another “roller”, a 48th-seed named BirbWizard.
I love this because it shows how low-hanging discoveries can just sit there for decades without anyone seeing them, even when thousands of dollars are on the line. (Seriously—the first-place finisher in the 2024 championships won $10k.) People spent nearly 40 years trying to tap buttons faster without ever realizing they should be tapping the other side of the controller instead.
But I also hate this, because:
Speaking of video games, I’ve always been mystified by “simulator” games built around mundane tasks, like Woodcutter Simulator, Euro Truck Simulator, PC Building Simulator, and Liquor Store Simulator (the promotional video promises that you get to “verify documents”). Then there’s Viscera Cleanup Detail, where you clean up after other people’s gunfights, PowerWash Simulator, where you powerwash things, and Robot Vacuum Simulator, where you play as a Roomba. And if all of that sounds too stimulating, you can try Rock Simulator, where you watch a rock on your screen as time passes. (Reviews are “very positive”.)2
It’s easy to deride or pathologize these games, so I was taken aback when I saw this defense from the video game streamer Northernlion3:
This is not brain rot, this is zen. You don’t get it.
Something being boring doesn’t make it brain rot. Something being exciting but having no actual quality to it is brain rot. This is boring. This is brain exercise. This is brain genesis.
[...] This content has you guys typing like real motherfuckers in chat. You’re typing with emotion. You’re typing “good luck.” You’re typing “I can’t watch this shit.” You’re typing “I can’t bear to be a part of this experience anymore.”
You’re feeling something. You’re feeling something human, man!
Maia Adar of Cosimo Research investigates whether straight men and women are attracted to the, uh, intimate smells of the opposite sex: “The results suggest that females stand closer to males who have fresh ball sweat applied to their neck.”
Cosimo’s next project: some people swear that taping your mouth shut overnight improves your sleep quality and reduces snoring. Does it? You can sign up for their study here.
Some cool developments in scientific publishing:
The new edition of the Handbook of Social Psychology is now available online and for free. I mentioned before that Mahzarin Banaji, one of the most famous social psychologists working today, became a psychologist because she found a copy of the Handbook at a train station. Now, thanks to the internet, you can become a psychologist without even taking the train!
Open Philanthropy and the Alfred P. Sloan Foundation are running a “pop-up journal” aimed at answering one question: what are the social returns to investments in research and development?4
The chair of the Navigation Fund announces that they’ll no longer use their billions of dollars to support publications in traditional scientific journals:
We began this as an experiment at Arcadia a few years ago. At the time, I expected some eventual efficiency gains. What I didn’t expect was how profoundly it would reshape all of our science. Our researchers began designing experiments differently from the start. They became more creative and collaborative. The goal shifted from telling polished stories to uncovering useful truths. All results had value, such as failed attempts, abandoned inquiries, or untested ideas, which we frequently release through Arcadia’s Icebox. The bar for utility went up, as proxies like impact factors disappeared.
People often wonder: what do we find normal these days that our descendants will find outrageous? I submit: our grandchildren will be baffled by our resistance to toilets with built-in bidets.
There’s a great seven-part series about the most consequential email list in history, a single listserv that birthed Effective Altruism, rationalism, the AI Risk movement, Bitcoin, several cults, several research institutes that may also have been cults, a few murders, and some very good blogs.
Here’s a thing I didn’t know: in 1972, the United States started giving Medicare coverage to anyone with end-stage renal disease, regardless of age, effectively doing “socialized medicine for an organ”. Today, 550,000 Americans receive dialysis through this plan, which costs “over one percent of the federal budget, or more than six times NASA’s budget”. I bring this up not because I think that’s too much (I’m glad that people don’t die), but because it’s hilarious how little I understand about what things the federal government pays for. Maybe I’m not the only one!
If you send a voice note via iMessage and mention “Chuck E. Cheese”, it goes through normally. If instead you mention “Dave & Busters”, your message will never arrive. It just disappears. Why? The answer is in this perfect podcast episode.
The coolest part of Civilization games is the Tech Tree, where you get to choose the discoveries that your citizens work on, from animal husbandry to giant death robots. That tree was apparently made up on the fly, but now someone has made an actual tech tree for humanity, which includes 1,550 technologies and 1,700 links between them. Here’s my favorite connection:
A blogger asks Why Psychology Hasn’t Had a New Big Idea in Decades. My favorite line:
To my mind, the question isn’t whether we decide to expand the scope of psychology to plants. The question is whether there’s any prospect at all of keeping plants out!
He got some good comments and responded to them here.
One of my favorite genres of art is “things that look way more modern than they are”, so I was very excited to run into Giovanni Battista Bracelli’s Oddities of Various Figures (1624):
In 1915, a doctor named Ernest Codman was like “hey guys, shouldn’t we keep track of patient outcomes so we know whether our treatments actually work?” and everyone else was like “no that’s a terrible idea”. So he did what anyone would do: he commissioned an extremely petty political cartoon and debuted it at a meeting of the local medical society. Apparently he didn’t pay that much for the commission, because it looks like it was drawn by a high schooler, not to mention the unhinged captions, the mixed metaphors (the golden goose is...also an ostrich?), and the bug helpfully labeled “humbug”. Anyway, this got him fired from his job at Harvard Medical School.
Codman’s ideas won in the end, and he was eventually hired back. To answer the Teddy Roosevelt-looking guy in the middle, apparently you can make a living as a clinical professor without humbug!
There’s an anime called The Melancholy of Haruhi Suzumiya that’s about time travel and, appropriately, you can watch the episodes in any order. In 2011, someone posted on 4Chan asking: “If viewers wanted to see the series in every possible order, what is the shortest list of episodes they’d have to watch?” An anonymous commenter replied with a proof demonstrating a lower bound. Mathematicians eventually realized that the proof was a breakthrough on a tricky problem about “superpermutations” and published a paper verifying it. The first author of that paper is “Anonymous 4Chan Poster”.
Silent films used to be accompanied by live musicians, but then synchronized sound came along. The American Federation of Musicians tried to fight back with a huge ad campaign opposing prerecorded music in movie theaters. They lost, but they did a great job:
Source: Paleofuture
These ads are a reminder: when a profession gets automated away, it’s the first generation, the one who has to live through the transition, that feels the pain. And then people forget it was any other way.
Uri Bram (Atoms vs. Bits) is releasing a physical version of his hit online party game called Person Do Thing, which is kinda like Taboo but better.
There’s a great blog about music and data whose author, a while back, started listening to every Billboard #1 hit song, in order, from the 1950s to today, and as he listened his spreadsheets grew and grew and eventually turned into his new book: Uncharted Territory.
In much sadder news, my friend was denied entry to the US based on some Substack posts he wrote covering the protests at Columbia when he was a journalism student there. You can read his account here.
Thanks to everyone who submitted to the 2025 Experimental History Blog Post Competition, Extravaganza, and Jamboree! I’m reading all the submissions now, and I plan to announce the winners in September.
I was on Spencer Greenberg’s Clearer Thinking podcast with the appropriately-titled episode “How F***ed Is Psychology?”
I recently wrote about how to unpack when deciding on a career (“The Coffee Beans Procedure”); someone wrote a detailed prompt that will help an AI do this with you.
A certain “Adam Mastroiannii Sub Stack” has appeared in my comments hawking some kind of WhatsApp scam. I’ve banned him and deleted the comments. Thanks to the folks who let me know—please give me a holler if he pops up again. The actual author of a Substack post always has a little tag that says “author” next to their name when they reply to comments, so if you ever see someone who looks like me but doesn’t have that tag, please execute a citizen’s arrest.
And finally, a post from the archive. All the promises in this post are still active5:
That’s all for now! Gotta get back to playing Blog Simulator 4.
-Adam
Credit to the blogger who mentioned this in his post How to Walk Through Walls.
The scientist-bloggers Slime Mold Time Mold speculate that humans may have several “hygiene” emotions that drive us to keep our living environments spic-and-span, which might explain the oddly large number of cleaning simulators, at least.
As quoted in this piece.
In the meantime, someone has taken a great first pass at this question.
If you emailed me about a research project and I haven’t gotten back to you, I’m sorry and please email me again!
2025-07-01 21:09:21
You should never trust a curmudgeon. If someone hates everything, it doesn’t mean much when they also hate this thing. That’s why, whenever I get hopped up on criticizing the current state of psychology, I stop and ask myself, “Okay, but what’s good?” If I can’t find anything, then my criticisms probably say more about me than they say…
2025-06-24 20:12:10
I meet a lot of people who don’t like their jobs, and when I ask them what they’d rather do instead, about 75% say something like, “Oh, I dunno, I’d really love to run a little coffee shop.” If I’m feeling mischievous that day, I ask them one question: “Where would you get the coffee beans?”
If that’s a stumper, here are some followups:
Which kind of coffee mug is best?
How much does a La Marzocco espresso machine cost?
Would you bake your blueberry muffins in-house or would you buy them from a third party?
What software do you want to use for your point-of-sale system? What about for scheduling shifts?
What do you do when your assistant manager calls you at 6am and says they can’t come into work because they have diarrhea?
The point of the Coffee Beans Procedure is this: if you can’t answer those questions, if you don’t even find them interesting, then you should not open a coffee shop, because this is how you will spend your days as a cafe owner. You will not be sitting droopy-lidded in an easy chair, sipping a latte and greeting your regulars as you page through Anna Karenina. You will be running a small business that sells hot bean water.
The Coffee Beans Procedure is a way of doing what psychologists call unpacking. Our imaginations are inherently limited; they can’t include all details at once. (Otherwise you run into Borges’ map problem—if you want a map that contains all the details of the territory that it’s supposed to represent, then the map has to be the size of the territory itself.) Unpacking is a way of re-inflating all the little particulars that had to be flattened so your imagination could produce a quick preview of the future, like turning a napkin sketch into a blueprint.1
When people have a hard time figuring out what to do with their lives, it’s often because they haven’t unpacked. For example, in grad school I worked with lots of undergrads who thought they wanted to be professors. Then I’d send ‘em to my advisor Dan, and he would unpack them in 10 seconds flat. “I do this,” he would say, miming typing on a keyboard, “And I do this,” he would add, gesturing to the student and himself. “I write research papers and I talk to students. Would you like to do those things?”
Most of those students would go, “Oh, no I would not like to do those things.” The actual content of a professor’s life had never occurred to them. If you could pop the tops of their skulls and see what they thought being a professor was like, you’d probably find some low-res cartoon version of themselves walking around campus in a tweed jacket going, “I’m a professor, that’s me! Professor here!” and everyone waving back to them going, “Hi professor!”
Or, even more likely, they weren’t picturing anything at all. They were just thinking the same thing over and over again: “Do I want to be a professor? Hmm, I’m not sure. Do I want to be a professor? Hmm, I’m not sure.”
Why is it so hard to unpack, even a little bit? Well, you know how when you move to a new place and all of your still-packed boxes confront you every time you come home? And you know how, if you just leave them there for a few weeks, the boxes stop being boxes and start being furniture, just part of the layout of your apartment, almost impossible to perceive? That’s what it’s like in the mind. The assumptions, the nuances, the background research all get taped up and tucked away. That’s a good thing—if you didn’t keep most of your thoughts packed, trying to answer a question like “Do I want to be a professor?” would be like dumping everything you own into a giant pile and then trying to find your one lucky sock.
When you fully unpack any job, you’ll discover something astounding: only a crazy person should do it.
Do you want to be a surgeon? = Do you want to do the same procedure 15 times a week for the next 35 years?
Do you want to be an actor? = Do you want your career to depend on having the right cheekbones?
Do you want to be a wedding photographer? = Do you want to spend every Saturday night as the only sober person in a hotel ballroom?
If you think no one would answer “yes” to those questions, you’ve missed the point: almost no one would answer “yes” to those questions, and those proud few are the ones who should be surgeons, actors, and wedding photographers.
High-status professions are the hardest ones to unpack because the upsides are obvious and appealing, while the downsides are often deliberately hidden and tolerable only to a tiny minority. For instance, shortly after college, I thought I would post a few funny videos on YouTube and, you know, become instantly famous2. I gave up basically right away. I didn’t have the madness necessary to post something every week, let alone every day, nor did it ever occur to me that I might have to fill an entire house with slime, or drive a train into a giant pit, or buy prosthetic legs for 2,000 people. If you read the “leaked” production guide written by Mr. Beast, the world’s most successful YouTuber, you’ll quickly discover how nutso he is:
I’m willing to count to one hundred thousand, bury myself alive, or walk a marathon in the world’s largest pairs of shoes if I must. I just want to do what makes me happy and ultimately the viewers happy. This channel is my baby and I’ve given up my life for it. I’m so emotionally connected to it that it’s sad lol.
(Those aren’t hypothetical examples, by the way; Mr. Beast really did all those things.)
Apparently 57% of Gen Z would like to be social media stars, and that’s almost certainly because they haven’t unpacked what it would actually take to make it. How many of them have Mr. Beast-level insanity? How many are willing to become indentured servants to the algorithm, to organize their lives around feeding it whatever content it demands that day? One in a million?
Another example: lots of people would like to be novelists, but when you unpack what novelists actually do, you realize that basically no one should be a novelist. For instance, how did Tracy Wolff, author of the Crave “romantasy” series, become one of the most successful writers alive? Well, this New Yorker piece casually mentions that Wolff wrote “more than sixty” books between 2007 and 2018. That’s 5.5 novels per year, every year, for 11 years, before she hit it big. And she’s still going! She has so many books now that her website has a search bar. Or you can browse through categories like “Contemporary Romance (Rock Stars/Bad Boys)”, “Contemporary Erotic Billionaire Romance”, “Contemporary Romance (Harlequin Desire)”, and “Contemporary New Adult Romance (Snowboarders!)”.
Wolff and Beast might seem extreme, but they’re only extreme in terms of output, not in terms of time on task. This is the obvious-but-overlooked insight that you find when you unpack: people spend so much time doing their jobs. Hours! Every day! It’s 2pm on a Tuesday and you’re doing your job, and now it’s 3:47pm and you’re still doing it. There’s no amount of willpower that can carry you through a lifetime of Tuesday afternoons. Whatever you’re supposed to be doing in those hours, you’d better want to do it.
For some reason, this never seems to occur to people. I was the tallest kid in my class growing up, and older men would often clap me on the back and say, “You’re gonna be a great basketball player one day!” When I’d balk, they’d be like, “Don’t you want to be on a team? Don’t you want to represent your school? Don’t you want to wear a varsity jacket and go to regionals?” But those are the wrong questions. The right questions, the unpacked questions, are: “Do you want to spend three hours practicing basketball every day? Do you want to dribble and shoot over and over again? On Thursday nights, do you want to ride the bus and sit on the bench while your more talented friends compete, secretly hoping that Brent sprains his ankle so you can have a chance to play?” And honestly, no! I don’t! I’d rather be at home playing Runescape.
When you come down from the 30,000-foot view that your imagination offers you by default, when you lay out all the minutiae of a possible future, when you think of your life not as an impressionistic blur, but as a series of discrete Tuesday afternoons full of individual moments that you will live in chronological order and without exception, only then do you realize that most futures make sense exclusively for a very specific kind of person. Dare I say, a crazy person.
Fortunately, I have good news: you are a crazy person.
I don’t mean you’re crazy in the sense that you have a mental illness, although maybe you do. I mean crazy in the sense that you are far outside the norm in at least one way, and perhaps in many ways.
Some of you guys wake up at 5am to make almond croissants, some of you watch golf on TV, and some of you are willing to drive an 80,000-pound semi truck full of fidget spinners across the country. There are people out there who like the sound of rubbing sheets of Styrofoam together, people who watch 94-part YouTube series about the Byzantine Empire, people who can spend an entire long-haul flight just staring straight ahead. Do you not realize that, to me, and to almost everyone else, you are all completely nuts?
No, you probably don’t realize that, because none of us do. We tend to overestimate the prevalence of our preferences, a phenomenon that psychologists call the “false consensus effect”3. This is probably because it’s really really hard to take other people’s perspectives, so unless we run directly into disconfirming evidence, we assume that all of our mental settings are, in fact, the defaults. Our idiosyncrasies may never even occur to us. You can, for instance, spend your whole life seeing three moons in the sky, without realizing that everybody else sees only one:
the first time i looked up into the night sky after i got glasses, [I] realized that you can, in fact, see the moon clearly. i assumed people who depicted it in art were taking creative license bc they knew it should look like that for some reason, and that the human eye was incapable of seeing the moon without also seeing two other, blurrier moons, sort of overlapping it
In my experience, whenever you unpack somebody, you inevitably discover something extremely weird about them. Sometimes you don’t have to dig that far, like when your friend tells you that she likes “found” photographs—the abandoned snapshots that turn up at yard sales and charity shops—and then adds that she has collected 20,000 of them. But sometimes the craziness is buried deep, often because people don’t think it’s crazy at all, like when a friend I knew for years casually disclosed that she had dumped all of her previous boyfriends because they had been insufficiently “menacing”.
This is why people get so brain-constipated when they try to choose a career, and why they often pick the wrong one: they don’t understand the craziness that they have to offer, nor the craziness that will be demanded of them, and so they spend their lives jamming their square-peg selves into round-hole jobs. For example, when I was in academia, there was this bizarre contingent of administrators who found college students vaguely vexing and exasperating. When the sophomores would, say, make a snowman in the courtyard with bodacious boobs, these dour admins would shake their heads and be like, “College kids are a real pain in the ass, huh!” They didn’t seem to realize that their colleagues actually liked hanging out with 18-22 year-olds, and that the occasional busty snowman was actually what made the job interesting. I don’t think these curmudgeonly managers even thought such a preference was possible.
Another example: when I was a pimply-faced teenager, I went to this dermatologist who always seemed annoyed to see patients. Like, how dare we bother him by seeking the services that he provides? Meanwhile, Dr. Pimple Popper—a YouTube account that does exactly what it says on the tin—has nearly 9 million subscribers. Clearly, there are people out there who find acne fascinating, and dermatology is one of the most competitive medical specialties, but apparently you can, through sheer force of will, lack of self-knowledge, and refusal to unpack the details, earn the right to do a job you hate for the rest of your life.
On the other hand, when people match their crazy to the right outlet, they become terrifyingly powerful. A friend from college recently reminded me of this guy I’ll call Danny, who was crazy in a way that was particularly useful for politics, namely, he was incapable of feeling humiliated. When Danny got to campus freshman year, he announced his candidacy for student body president by printing out like a thousand copies of his CV—including his SAT score!—and plastering them all over campus. He was, of course, widely mocked. And then the next year, he won. It turns out that people vote for the name that they recognize, and it doesn’t really matter why they recognize it. By the time Danny ran for reelection and won in a landslide, he was no longer the goofy freshman who taped a picture of his own face to every lamp post. At that point, he was the president.45
Unpacking is easy and free, but almost no one ever does it because it feels weird and unnatural. It’s uncomfortable to confront your own illusion of explanatory depth, to admit that you really have no idea what’s going on, and to keep asking stupid questions until that changes.
Making matters worse, people are happy to talk about themselves and their jobs, but they do it at this unhelpful, abstract level where they say things like, “oh, I’m the liaison between development and sales”. So when you’re unpacking someone’s job, you really gotta push: what did you do this morning? What will you do after talking to me? Is that what you usually do? If you’re sitting at your computer all day, what’s on your computer? What programs are you using? Wow, that sounds really boring, do you like doing that, or do you endure it?
You’ll discover all sorts of unexpected things when unpacking, like how firefighters mostly don’t fight fires, or how Twitch streamers don’t just “play video games”; they play video games for 12 hours a day. But you’re not just unpacking the job; you’re also unpacking yourself. Do any aspects of this job resemble things you’ve done before, and did you like doing those things? Not “Did you like being known as a person who does those things?” or “Do you like having done those things?” but when you were actually doing them, did you want to stop, or did you want to continue? These questions sound so stupid that it’s no wonder no one asks them, and yet, somehow, the answers often surprise us.
That’s certainly true for me, anyway. I never unpacked any job I ever had before I had it. I would just show up on the first day and discover what I had gotten myself into, as if the content of a job was simply unknowable before I started doing it, a sort of “we have to pass the bill to find out what’s in it” kind of situation. That’s how I spent the summer of 2014 as a counselor at a camp for 17-year-olds, even though I could have easily known that job would require activities that I hated, like being around 17-year-olds. Could I have known specifically that my job would include such tasks as “escorting kids across campus because otherwise they’ll flee into the woods” or “trying to figure out whether anyone brought booze to the dance by surreptitiously sniffing kids’ breath?” No. But had I unpacked even a little bit, I would have picked a different way to spend my summer, like selling booze to kids outside the dance.
It’s no wonder that everyone struggles to figure out what to do with their lives: we have not developed the cultural technology to deal with this problem because we never had to. We didn’t exactly evolve in an ancestral environment with a lot of career opportunities. And then, once we invented agriculture, almost everyone was a farmer for the next 10,000 years. “What should I do with my life?” is really a post-1850 problem, which means, in the big scheme of things, we haven’t had any time to work on it.
The beginning of that work is, I believe, unpacking. As you slice open the boxes and dump out the components of your possible futures, I hope you find the job that’s crazy in the same way that you are crazy. And then I hope you go for it! Shoot for the stars! Even if you miss, you’ll still land on one of the three moons.
You can think of unpacking as the opposite of attribute substitution; see How to Be Wrong and Feel Bad.
In my defense, this was a decade ago, closer to the days when you could become world famous by doing a few different dances in a row.
There is also a “false uniqueness effect”, but it seems to show up more rarely, on traits where people are motivated to be better than others, or when people have biased information about themselves. So people who like Hawaiian pizza probably think their opinion is more common than it is (false consensus). But if you pride yourself on the quality of your homemade Hawaiian pizza, you probably also overestimate your pizza-making skills (false uniqueness).
I’m pretty sure every campus politician was like this. During one election cycle, the pro-Palestine and pro-Israel groups started competing petitions to remove/keep a brand of hummus in the dining hall that allegedly had ties to the IDF. One of the guys running for class rep signed both petitions. When someone called him out, his response was something like, “I’m just glad we’re having dialogue.” Anyway, he won the election.
A few years later, a sophomore ran for student body president on a parody campaign, promising waffle fries and “bike reform.” He won a plurality of votes in the general election, but lost in the runoff, though he did get a write-up in the New York Times. Now he’s a doctor.
Top-tier insanity can sometimes make up for mid-tier talent. I’ve been in five-ish different improv communities, and in every single one there was someone who was pretty successful despite not being very good at improv. These folks were willing to mortgage the rest of their life to support their comedy habit—they’d half-ass their jobs, skip class, ignore their partners and kids, and in return they could show up for every audition, every gig, every side project. Their laser focus on their dumb art didn’t make them great, but it did make them available. Everybody knew them because they were always around, and so when one of your cast mates dropped out at the last second and you needed someone to fill in, you’d go, “We can always call Eric.” If you’ve ever seen someone on Saturday Night Live who isn’t very funny and wondered to yourself, “How did they get there?”, maybe that’s how.
2025-06-10 20:11:22
It’s cool to run big, complicated science experiments, but it’s also a pain in the butt. So here’s a challenge I set for myself: what’s the lowest-effort study I could run that would still teach me something? Specifically, these studies should:
Take less than 3 hours
Cost less than $20
Show me something I didn’t already know
Be a “hoot”
I call these Dumb Studies, because they’re dumb. Here are three of them.
(You can find all the data and code here.)
I’m bad at tasting things. I once found a store-bought tiramisu at the back of the fridge and was like “Ooh, tiramisu!” Then I ate some and was like, “Huh, this tiramisu is kinda tangy,” and when my wife tasted it, she immediately spat it out and said, “That’s rancid.” We looked at the box and discovered that the tiramisu had expired several weeks earlier. I would say this has permanently harmed my reputation within my family.
That experience left me wondering: just how bad are my taste buds? Like, in a blind test, would I even be able to tell different flavors apart? I know that sight influences taste, of course—there are all sorts of studies dunking on wine enthusiasts: they can’t match the description of a wine to the actual wine, they like cheaper wine better when they don’t know the price, and if you put some red food coloring in white wine, people think it’s red wine.1 But what if I’m even worse than that? What if, when I close my eyes, I literally can’t tell what’s in my mouth?
~~~MATERIALS~~~
My friend Ethan bought four kinds of baby food. They were all purees, so I couldn’t use texture as a clue, and I didn’t look at any of the containers beforehand.
~~~PROCEDURE~~~
I put on a blindfold and tasted a spoonful of each kind of baby food and tried to guess what it was.
~~~RESULTS~~~
Here’s how I did:
~~~DISCUSSION~~~
I would rate my performance as “humiliating”. Butternut squash and sweet potato are pretty similar, so I’ll give myself that one, but what kind of idiot tastes “pear” and thinks “lemon-lime”? I knew in the moment that there was probably no such thing as “lemon-lime” baby food (did Gerber’s acquire Sprite??), but that’s literally what it tasted like, so that’s what I said. Mixing up banana and strawberry was way below even my very low expectations for myself. When I took the blindfold off, people looked genuinely concerned.2
Here’s something interesting that happened: once my friends revealed the identity of each flavor, I immediately “tasted” it. It was like looking at one of those visual illusions that looks like a bunch of blobs and then someone tells you “it’s a parrot!” and suddenly the parrot jumps out at you and you can’t not see the parrot anymore. Except in this case, the parrot was banana-flavored.
My friends and I were hosting a party and we thought it would be funny to ask people to stick their hands in various buckets, just to see how long they would do it. We didn’t exactly have a theory behind this. We just thought something weird might happen.3
~~~MATERIALS~~~
We got two buckets, filled one with ice water, and filled the other bucket with nothing.
~~~PROCEDURE~~~
We flipped a coin to determine which bucket each partygoer (N = 23) would encounter first. (The buckets were in separate rooms, so they didn’t know which one was coming next.) Upon entering each room, we told the participant, “Please put your hand in the bucket for as long as you want.” Then we timed how long they kept their hands in each bucket.
~~~RESULTS~~~
On average, people kept their hands in the ice bucket for 49.26 seconds, and they kept their hands in the empty bucket for 31.57 seconds. The difference between these two averages was not statistically significant.4
But averages aren’t very revealing here, because people differed a lot. Here’s another way of looking at the same data. Each participant has their own row and two dots: a red dot for how long they spent in the empty bucket, and a blue dot for how long they spent in the ice bucket. For privacy, all participants’ names have been replaced with the names of Muppets.
~~~DISCUSSION~~~
We learned two things from this study.
People are weird
Putting your hand in a bucket of ice is supposed to be a universally negative experience. It’s known in the science biz as the “cold pressor task,” and they use it to study pain because it hurts so bad. But some people liked it! One guy thanked the experimenter for the opportunity to make his hand really cold, which he enjoyed very much. Another revealed this: you know those drink chillers at the grocery store where you can put a bottle of white wine or a six pack in a vat of icy water and it swirls around and it chills your drink really fast? He used to stick his hand in one of those for fun.
Feeling pointless might hurt worse than feeling pain.
Say what you will about sticking your hand in an ice bucket: it’s something to do. You feel like you’re testing your mettle, your skin changes colors, your fingers tingle and that’s kinda fun. When you put your hand in an empty bucket, nothing happens. You just stand there like an idiot with your hand in a bucket. People think physical pain is inherently negative, like it’s pure badness. But when you lock eyes with Miss Pain Piggy and she holds her hand in ice water for 466 seconds straight, you start to question a lot of assumptions.5
Here’s something that’s always bugged me: people love sugar and salt, right? I mean, duh, of course they do. So why doesn’t anyone pour themselves a big bowl of salt and sugar and chow down? Is it just social norms and willpower preventing us from indulging our true desires? Or is it because pure sugar and salt don’t actually taste that good? Could it be that our relationship with these delicious rocks is, in fact, far more nuanced than simply wanting as much of them as possible?
This study was partly inspired by cybernetic psychology, which posits that the mind is full of control systems that try to keep various life-necessities at the right level. Sugar and salt are both necessary for life, and people certainly do seem to desire both of them. And yet, if you eat too much of them, you die. That sounds like a job for a control system—maybe there’s some kind of body-brain feedback loop trying to keep salt and sugar at the appropriate level, not too high and not too low. One way to investigate a control system is just to put stuff in front of someone and see what they do. That sounded pretty dumb, so that’s what I did.
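If it helps to see the idea in code, here’s a toy sketch of the kind of feedback loop that cybernetic psychology has in mind. Every number below is invented for illustration; this is the general shape of a control system, not SMTM’s actual model:

```python
# Toy homeostat for salt: a set point, an error signal, and a craving
# that shrinks as you approach the target. All numbers are invented.
SET_POINT = 1.0  # the hypothetical "right" internal salt level
GAIN = 2.0       # how strongly being off-target turns into craving

def salt_drive(level):
    """Positive = craving (salt tastes good); negative = aversion."""
    return GAIN * (SET_POINT - level)

level = 0.6  # start a bit depleted, e.g. after a sweaty run
for step in range(6):
    drive = salt_drive(level)
    intake = max(drive, 0.0) * 0.1  # eat in proportion to craving...
    level += intake - 0.02          # ...minus a constant loss (sweat, etc.)
    print(f"step {step}: level = {level:.2f}, drive = {drive:+.2f}")
# The craving decays as the level climbs toward the set point, which is
# the cybernetic story for why the same mineral can taste great or gross.
```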
~~~MATERIALS~~~
I got a measuring spoon marked “1/4 teaspoon” and some salt and sugar.
~~~PROCEDURE~~~
I ran this study at an Experimental History meetup in Cambridge, MA6. I brought people (N = 23)7 to a testing room and sat them at a desk. I first showed them the measuring spoon and asked them, “If I were to give you this amount of sugar, is that something you would like to eat?” If they said “Yes”, I poured 1/4 teaspoon of sugar into a tiny cup and gave it to them. Once they ate it, I asked them to rate the experience from 1 (not good at all) to 5 (very good), and I asked them if they’d like to do it again. If they said yes again, I gave them another 1/4 teaspoon of sugar and got their rating. I repeated this process until they refused the sugar. (Nobody took more than two shots.) Then I repeated the process with 1/4 teaspoon of salt.8 I should have randomized the order, but in all the excitement, I forgot.
~~~RESULTS~~~
About half of the participants flat-out refused the sugar, and two-thirds refused the salt. Anecdotally, many of the people who refused the sugar said something like, “oh, I’d like to eat it, but I shouldn’t.” People did not feel that way about the salt. They were just like, “No thanks”.9
The people who did try the sugar generally liked it. The people who tried the salt did not. (The latter were, by the way, all men.) A few of the guys put on a brave face after downing their salt, but the rest said things like “blech!” and “oh!”. Four people took an extra shot of sugar, and they liked it fine. The two people who took an extra salt shot gave the experience a big thumbs-down.
~~~DISCUSSION~~~
Why don’t people eat big spoonfuls of sugar and salt? For sugar, the answer might be “we think it’s sinful”. But it also might be because raw sugar isn’t actually super delicious. I’m a bit surprised that the sugar ratings weren’t even higher—isn’t sugar supposed to be pure bliss?10 For salt, the answer might just be “it tastes bad on its own”.
It’s weird that people had such strong reactions to such small amounts. There’s about 1g of sugar in 1/4 teaspoon, and a single Reese’s peanut butter cup—a notoriously delicious treat—contains 11x that much. Meanwhile, 1/4 teaspoon of salt is about 1.4g, and I happily ate more than that in a single sitting yesterday via a pile of tater tots dunked in ketchup. For some reason, people seem to find these minerals far more appealing when they’re mixed with other stuff.
Why would that be? Maybe it has to do with how much you need to survive, and how much you can eat before you die.
The estimated lethal dose of salt is 4g per kilogram of body weight, and people really do die from ingesting too much of it. In one case, a Japanese woman had a fight with her husband, drank a liter of shoyu sauce containing an estimated 160g of salt, and died the next day. In another case, a psychiatric hospital forced a 69-year-old man to drink water containing 216g of salt (they wanted him to throw up because he had ingested his roommate’s medication); he was declared brain dead 36 hours later.
Meanwhile, the estimated lethal dose of sugar is much higher: 30g per kilogram of body weight. An extremely trustworthy-seeming Buzzfeed article called “It’s Actually Pretty Hard to Eat So Much Sugar that You Die” estimates that the average adult would need to eat 680 Hershey’s Kisses, 408 Twix Minis, or 1,360 pieces of candy corn before they kicked the bucket.
It takes a lot longer to eat several pounds of candy than it does to chug a liter of shoyu, so it’s easier for the body to defend against a sugar overdose than a salt overdose (by making you feel nauseous, cramping, throwing up, etc.). The best way to avoid death by salt, then, is to avoid eating large doses of salt in the first place, and the best way to do that is to make it taste bad. Maybe that’s why the same amount of salt tastes nasty when served raw and tastes delicious when sprinkled over a basket of french fries or dissolved in a bowl of soup—you’re only getting a little bit at a time, so you won’t shoot past your target level.11
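For the curious, here’s the back-of-the-envelope math. The 70 kg adult is my assumption; the sources above only quote per-kilogram doses:

```python
# Back-of-the-envelope lethal doses, assuming a 70 kg adult (my assumption).
body_kg = 70
salt_lethal_g = 4 * body_kg    # 4 g/kg  -> ~280 g of salt
sugar_lethal_g = 30 * body_kg  # 30 g/kg -> ~2,100 g of sugar
shoyu_salt_g = 160             # the liter of shoyu from the case report

print(f"lethal salt dose:  ~{salt_lethal_g} g")
print(f"lethal sugar dose: ~{sugar_lethal_g} g "
      f"({sugar_lethal_g / salt_lethal_g:.1f}x the salt dose)")
print(f"one liter of shoyu: {shoyu_salt_g / salt_lethal_g:.0%} "
      "of a lethal salt dose, drinkable in minutes")
```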
Anyway, these results suggest that we do not “love” sugar and salt. We love a certain amount of sugar and salt, consumed at a certain rate, and perhaps even in a certain ratio to other nutrients. The results also suggest that coming to an Experimental History meetup is a super fun and cool time.
I’m showing you Dumb Studies as if they’re something new. But they’re not. At the beginning of science, all studies were Dumb.
Robert Boyle, the father of chemistry, did a thorough investigation of a piece of veal that was weirdly shiny (results inconclusive). Antonie van Leeuwenhoek, the father of microbiology, blew smoke at worms to see if it would kill them (it didn’t). Robert Hooke, the father of a bunch of stuff12, sprinkled some flour on a glass plate and then ran a bow along the side like he was playing the fiddle and was like “ooh look at the lines the vibration makes”. These studies looked stupid even then, and people duly ridiculed them for it.
Ever since then, the most groundbreaking scientists have always spent a big chunk of their time—perhaps most of their time—goofing around. Francis Galton, the guy who invented like 10% of modern science13, took a secret whistle to the zoo and whistled at all the animals (the lions hated it). Barbara McClintock learned how to control her perception of pain so she wouldn’t need anesthesia during dental procedures. Richard Feynman did about a million Dumb Studies, including a demonstration that urination isn’t driven by gravity because you can pee standing on your head. The neurologist V.S. Ramachandran was able to temporarily turn off amputees’ phantom limbs by squirting water in their ears and making them look at mirrors. They all had what I call experimenter’s urge: the desire to, quite literally, fuck around and find out.
After science became a profession, we started expecting our science to look very science-y, no Dumb Studies allowed. On top of that, the replication crisis left us all with a cop mentality that treats anything fun as suspicious. People want to blame the slowdown of scientific progress on the “burden of knowledge”14 or “ideas getting harder to find”—I disagree, and will fight such people, but I do agree that we're suffering under a modern burden: the burden of respectability.15
There’s a time and a place for the Serious Study. Sometimes you’re spending millions of dollars, for instance, and you can’t afford to be loosey-goosey with the procedure. But reality is very weird, and if you ever want to understand it, you have to bump into it over and over, in as many places and from as many angles as possible. You need the freedom to be Dumb. You must inspect the shining meat, you must pee standing on your head, and you must, I submit, eat this baby food.
I thought this was common knowledge, but apparently it’s not. In this New Yorker article about blind wine taste-testing, a professor of “viticulture and enology” confidently states that no one would ever mix up red wine and white wine right before the author does exactly that.
Come to think of it, why is “strawberry-banana” such a common flavor? Where did that come from?
An initial writeup of this experiment was previously published in the first, and so far only, issue of The Loop.
Specifically, t(22) = 0.69, p = .50. The more statistically-minded folks might be wondering: “Did you have enough power to detect an effect here? You only had 23 participants, after all.” Great question! With N = 23, we have about an 80% chance to detect an effect of d = .6 with a two-tailed paired t-test. That’s roughly what we consider a “medium” effect, based on something one statistician said once. To put that in context, the standardized effect of SSRIs on depression is .4, the effect of ibuprofen on arthritis pain is .42, and the effect of “women being more empathetic than men” is .9. The Bayes factor for this difference is .27, meaning moderately strong evidence for the null, according to something another statistician said once. So we can’t say there’s no difference between the empty bucket and the ice bucket, but if there is any difference, we can be pretty confident that it isn’t large.
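If you want to fiddle with numbers like these yourself, here’s a minimal sketch with made-up data (the real data and code are linked at the top of the post; the fake bucket times are just my stand-ins):

```python
# Minimal sketch with FAKE data; the real data and code are linked above.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestPower

rng = np.random.default_rng(0)
ice = rng.exponential(scale=49.0, size=23)    # fake ice-bucket times (seconds)
empty = rng.exponential(scale=32.0, size=23)  # fake empty-bucket times (seconds)

# Paired two-tailed t-test: the same 23 people did both buckets.
result = stats.ttest_rel(ice, empty)
print(f"t(22) = {result.statistic:.2f}, p = {result.pvalue:.2f}")

# Power to detect a "medium" effect (d = .6) with 23 pairs at alpha = .05.
power = TTestPower().solve_power(effect_size=0.6, nobs=23, alpha=0.05,
                                 alternative="two-sided")
print(f"power for d = .6: {power:.2f}")  # comes out around .80, as claimed above
```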
Maybe this is also why, if you leave people alone in an empty room with a shock machine, they will voluntarily shock themselves.
This meetup was co-hosted with The Browser, a terrific newsletter that curates interesting internet stuff.
Not a typo; somehow every party I host ends up with 23 people at it. (Some people were there both times, but most weren’t.)
The party took place in the afternoon and had both salty and sweet snacks available, so each person was coming in with a different amount of sugar and salt already in their system.
This was one of the fun parts of running Dumb Studies: you let people do interesting things. In a Serious Study in psychology, for instance, we don’t usually let people say “no”. I mean, we do, obviously, for ethical reasons, but if they refuse some part of the study, then the study is over.
When you run a Dumb Study, you can treat all behavior as data. If someone doesn’t want to put their hand in the empty bucket or they don’t want to eat the salt, that’s not noise. That’s a result.
I don’t quite have enough data to tell, but maybe there’s more variance in sugar preferences than there is in salt preferences. One participant remarked that he had eaten pure sugar earlier that day. And “salt tooth”, while apparently a thing, is far less common than “sweet tooth”, and it sounds like a D-list pirate name.
This would also predict that Gatorade tastes better after a run on a hot day; you’ve sweated out some of your sugar and salt stores, so your taste buds give you a thumbs-up for re-up.
For instance, Hooke’s law, Hooke’s joint, Hooke’s instrument, and Hooke’s wheel.
To be clear, both good stuff and bad stuff.
Supposedly the “last man who knew everything” was English polymath Thomas Young, who died in 1829.
Weirdly enough, “being respectable” does not include “posting your data and code”, which most studies do not do.
2025-05-27 22:41:00
We are currently living through the greatest experiment humankind has ever tried on itself, an experiment called the internet. As a species, we’ve done some wacky things before—domesticating wolves, planting seeds in the ground, having sex with Neanderthals, etc.—but all of those played out over millennia, whereas we’re kinda doing this one in a single lifetime.
So far the results are, I would say, mixed. But the weirdest part is that most people act like they’re spectators to this whole thing, like, “Oh, I have nothing to do with the outcome of this species-wide experiment, that’s up to other people. Hope it turns out good!” That sentiment makes no sense, because the internet is us. There are no sidelines. Whatever you write, read, like, forward, comment on, subscribe to, pay for—that thing gets bigger. So if this experiment is gonna work, it’s because we make it work.
In her new book, the historian Ada Palmer argues that what made the Renaissance different was that “people said it was different, believed it was different, and claimed and felt that they were part of a project to transform the world on an unprecedented scale.”
Well, I feel that way right now. If you do too, let’s make a Renaissance.
The blogosphere has a particularly important role to play, because now more than ever, it’s where the ideas come from. Blog posts have launched movements, coined terms, raised millions, and influenced government policy, often without explicitly trying to do any of those things, and often written under goofy pseudonyms. Whatever the next vibe shift is, it’s gonna start right here.
The villains, scammers, and trolls have no compunctions about participating—to them, the internet is just another sandcastle to kick over, another crowded square where they can run a con. But well-meaning folks often hang back, abandoning the discourse to the people most interested in poisoning it. They do this, I think, for three bad reasons.
One: lots of people look at all the blogs out there and go, “Surely, there’s no room for lil ol’ me!” But there is. Blogging isn’t like riding an elevator, where each additional person makes the experience worse. It’s like a block party, where each additional person makes the experience better. As more people join, more sub-parties form—now there are enough vegan dads who want to grill mushrooms together, now there’s sufficient foot traffic to sustain a ring toss and dunk tank, now the menacing grad student next door finally has someone to talk to about Heidegger. The bigger the scene, the more numerous the niches.
Two: people keep to themselves because they assume that blogging is best left to the professionals, as if you’re only allowed to write text on the internet if it’s your full-time job. But the whole point of this gatekeeper-less free-for-all is that you can do whatever you like. Wait ten years between posts, that’s fine! The only way to do this wrong is to worry about doing it wrong.
And three: people don’t want to participate because they’re afraid no one will listen. That’s certainly possible—on the internet, everyone gets a shot, but no one gets a guarantee. Still, I’ve seen first-time blog posts go gangbusters simply because they were good. And besides, the point isn’t to reach everybody; most words are irrelevant to most people. There may be six individuals out there who are waiting for exactly the thing that only you can write, and the internet has a magical way of switchboarding the right posts to the right people.
If that ain’t enough, I’ve seen people land jobs, make friends, and fall in love, simply by posting the right words in the right order. I’ve had key pieces of my cognitive architecture remodeled by strangers on the internet. And the party’s barely gotten started.
But I get it—it takes a little courage to walk out your front door and into the festivities, and it takes some gumption to meet new people there. That’s why I’m running the Second Annual Experimental History Blog Post Competition, Extravaganza, and Jamboree.
Submit your best unpublished blog post, and if I pick yours, I’ll send you real cash money and I’ll tell everybody I know how great you are.
You can see last year’s winners and honorable mentions here. They included: self-experiments, travelogues, tongue-in-cheek syllabi, reviews of books that don’t exist, literary essays, personal reveries, and one very upsetting post about picking your nose. The authors were sophomores, software engineers, professors, filmmakers, public health workers, affable Midwesterners, and straight up randos and normies. So there’s no one kind of thing I’m looking for, and no one kind of person I’m looking for, either.
That said, if you’re looking for some inspiration, here are some triumphs of the form:
Book Reviews: On the Natural Faculties, The Gossip Trap, Progress and Poverty, all of The Psmith’s Bookshelf
Deep Dives: Dynomight on air quality and air purifiers, Higher than the Shoulders of Giants, or a Scientist’s History of Drugs, How the Rockefeller Foundation Helped Bootstrap the Field of Molecular Biology, all of Age of Invention
Big Ideas: Ads Don’t Work That Way, On Progress and Historical Change, Meditations on Moloch, Reality Has a Surprising Amount of Detail, 10 Technologies that Won’t Exist in 5 Years
Personal Stories/Gonzo Journalism: No Evidence of Disease, It-Which-Must-Not-Be-Named, adventures with the homeless people outside my house, My Recent Divorce and/or Dior Homme Intense, The Potato People
Scientific Reports/Data Analysis: Lady Tasting Brine, Fahren-height, A Chemical Hunger, The Mind in the Wheel, all of Experimental Fat Loss, all of The Egg and the Rock
How-to and Exhortation: The Most Precious Resource Is Agency, How To Be More Agentic, Things You’re Allowed to Do, Are You Serious?, 50 Things I Know, On Befriending Kids
Good Posts Not Otherwise Categorized: The biggest little guy, Baldwin in Brahman, The Alameda-Weehawken Burrito Tunnel, Bay Area House Parties (1, 2, 3, etc.), Alchemy is ok, Ideas Are Alive and You Are Dead, If You’re So Smart Why Can’t You Die?, A blog post is a very long and complex search query to find fascinating people and make them route interesting stuff to your inbox
And of course:
Last year’s winners: We’re not going to run out of new anatomy anytime soon, The Best Antibiotic for Acne is Non-Prescription, and Medieval Basket Weaving
(By the way, if you have some all-time great blog posts, please leave them in the comments! I’d love to expand this list.)
Paste your post into a Google Doc.
VERY IMPORTANT STEP: Change the sharing setting to “Anyone with the link”. This is not the default setting, and if you don’t change it, I won’t be able to read your post.
First place: $500
Second place: $250
Third place: $100
I’ll also post an excerpt of your piece on Experimental History and heap praise upon it, and I’ll add your blog to my list of Substack recommendations for the next year. You’ll retain ownership of your writing, of course.
Only unpublished posts are eligible. As fun as it would be to read every blog post ever written, I want to push people to either write something new or finish something they’ve been sitting on for too long. You’re welcome to publish your post after you submit it. If you win, I’ll reach out beforehand and ask you for a direct link to your post so I can include it in mine.
One entry per person. Multiple authors is fine.
There’s technically no word limit, but if you send me a 100,000 word treatise I probably won’t finish it.
You don’t need to have a blog to submit, but if you win and you don’t have one, I will give you a rousing speech about why you should start one.
Previous top-three winners are not eligible to win again, but honorable mentions are.
Uhhh otherwise don’t break any laws I guess??
Submissions are due July 1. Submit here.
2025-05-13 21:49:57
I’ve complained a lot about the state of psychology, but eventually it’s time to stop whining and start building. That time is now.
My mad scientist friends who go by the name Slime Mold Time Mold (yes, really) have just published a book that lays out a new foundation for the science of the mind. It’s called The Mind in the Wheel, and it’s the most provocative thing I’ve read about psychology since I became a psychologist myself—this is probably the first time I’ve felt surprised by something in the field since 2016. It’s maybe right, it’s probably wrong, but there’s something here, something important, and anybody with a mind ought to take these ideas for a spin. I realize some people are skittish about reading books from pseudonymous strangers on the internet—isn’t that what your mom warned you not to do?—but baby, that’s what I’m here for! So let’s go—
Lots of people agree that psychology is stuck because it doesn’t have a paradigm, but that’s where the discussion ends. We all pat our pockets and go, “paradigm, paradigm...uh...hmm, I seem to have left mine at home, do you have one?”
Our minds turn to mush at this point because nobody has ever been clear on what a paradigm is. Thomas Kuhn, the guy who coined the term, was famously hard to understand.1 People assumed that “paradigm shift” just meant “a big change” and so they started using the term for everything: “We used to wear baggy jeans, now we wear skinny jeans! Paradigm shift!”
So let’s get clear: a paradigm is made out of units and rules. It says, “the part of the world I’m studying is made up of these entities, which can do these activities.”
In this way, doing science is a lot like reverse-engineering a board game. You have to figure out the units in play, like the tiles in Scrabble or the top hat in Monopoly. And then you have to figure out what those units can and can’t do: you can use your Scrabble tiles to spell “BUDDY” or “TREMBLE”, but not “GORFLBOP”. The top hat can be on Park Place, it can be on B&O Railroad, but it can never be inside your left nostril, or else you’re not playing Monopoly anymore.
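If you like to think in code, here’s one toy way (my own, not SMTM’s notation) to hold the definition in your head: a paradigm is a set of units plus rules that say which configurations are legal.

```python
# A paradigm as data: units, plus rules about what the units can do.
# Toy example of my own devising, using the Monopoly analogy above.
paradigm = {
    "units": {"top hat", "Park Place", "B&O Railroad"},
    "rules": [
        lambda s: s["piece"] in {"top hat"},             # is it a known unit?
        lambda s: s["location"] != "your left nostril",  # is the move allowed?
    ],
}

def is_legal(state):
    """A state is legal if every rule of the paradigm allows it."""
    return all(rule(state) for rule in paradigm["rules"])

print(is_legal({"piece": "top hat", "location": "Park Place"}))         # True
print(is_legal({"piece": "top hat", "location": "your left nostril"}))  # False
```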
A paradigm shift is when you make a major revision to the list of units or rules. And indeed, when you look back at the biggest breakthroughs in the history of science, they’re all about units and rules. Darwin’s big idea was that species (units) can change over time (rule). Newton’s big idea was that the rules of gravitation that govern the planets also govern everything down here on Earth. Atomic theory was a proposal about units (all matter is made up of things called atoms) and it came with a lot of rules (“atoms always combine in the same proportions”, “matter can’t be created or destroyed”, etc.). When molecular biologists figured out their “central dogma” in the mid-1900s, they expressed it in terms of units (DNA, RNA, proteins) and what those units can do (DNA makes RNA, RNA makes proteins).
If all this sounds obvious, that’s great. But in the ~150 years that psychology has existed, this is not what we’ve been doing.
When you’re making and testing conjectures about units and rules, let’s call that science. It’s easy to do two other things that look like science, but aren’t, and this is unfortunately what a lot of research in psychology is like.
First, we can do studies without any inkling about the units and rules at all. You know, screw around and find out! Just run some experiments, get some numbers, do a few tests! A good word for this is naive research. If you’re asking questions like “Do people look more attractive when they part their hair on the right vs. the left?” or “Does thinking about networking make people want soap?” or “Are people less likely to steal from the communal milk if you print out a picture of human eyes and hang it up in the break room?” you’re doing naive research.2
The name is slightly pejorative, but only slightly, and for good reason. On the one hand, some proportion of your research should be naive, because there’s always a chance you stumble onto something interesting. If you’re locked into a paradigm, naive research may be the only way you discover something that complicates the prevailing view.
On the other hand, you can do naive research forever without making any progress. If you’re trying to figure out how cars work, for instance, you can be like, “Does the car still work if we paint it blue?” *checks* “Okay, does the car still work if we...paint it a slightly lighter shade of blue??”
(As SMTM puts it: “To get to the moon, we didn’t build two groups of rockets and see which group made it to orbit.”)
There’s a second way to do research that’s non-scientific: you make up a bunch of hand-wavy words and then study them. A good name for this is impressionistic research. If you’re studying whether “action-awareness merging” leads to “flow”, or whether students’ “math self-efficacy” mediates the relationship between their “perceived classroom environment” and their scores on a math test, or whether “mindfulness” causes “resilience” by increasing “zest for life”, you are doing impressionistic research.
The problem with this approach is that it gets you tangled up in things that don’t actually exist. What is “zest for life”? It is literally “how you respond to the Zest for Life Scale”. And what does the Zest for Life Scale measure? It measures...zest for life. If you push hard enough on any psychological abstraction, you will eventually find a tautology like this. This is why impressionistic research makes heavy use of statistics: the only way you can claim you’ve discovered anything is to produce a significant p-value.
Naive and impressionistic research are often respectable-looking ways to go nowhere. For example, if you were trying to understand Monopoly using the tools of naive research, you might start by correlating “the number that appears on the dice” with “money earned”. That sounds like a reasonable idea, but you’d end up totally confused—sometimes people get money when they roll higher numbers, but sometimes they roll higher numbers and lose money, and sometimes they gain or lose money without rolling at all. These inconsistent results could spawn academic feuds that play out over decades: “The Monopoly Lab at Johns Hopkins finds that rolling a four is associated with an increase in wealth!” “No, the Monocle Group at UCLA did a preregistered replication and it actually turns out that odd numbers are good, but only if you fit a structural equation model and control for the past ten rolls!”
The impressionistic approach would be even more hopeless. At least dice and dollars are actual parts of the game; if you start studying abstractions like “capitalism proneness” and “top hat-titude”, you can spin your wheels forever. The only way you’ll ever understand Monopoly is by making guesses about the units and rules of the game, and then checking whether your guesses hold up. Otherwise, you might as well insert the top hat directly into your left nostril.
We’re going to get to psychology in a second, but first we have to avoid a very tempting detour. Whenever I talk to people about the units and rules of psychology, they’re immediately like, “Oh, so you’re saying psychology should be neuroscience. The units are neurons and—”
Lemme stop you right there, because that’s not where we’re going.
Let’s say you’re trying to fix the New York City transit system, so you’re thinking about trains, stations, passengers, etc. All of those things are made of smaller units, but you don’t get better at designing the system by thinking about the smallest units possible. If you start asking questions like, “How do I use a collection of iron atoms to transport a collection of carbon and hydrogen atoms?” you’ll miss the fact that some of those carbon and hydrogen atoms are in the shape of butts that need seats, or that some of them are in the shape of brains that need to be told when the train is arriving.
Those smaller units do matter, because you’re constrained by what they can and can’t do—you can’t build a train that goes faster than the speed of light, and you can’t expect riders to be able to phase through train doors like those two twin ghosts with dreadlocks from the second Matrix movie. But lower-level truths like the Planck constant, the chemical makeup of the human body, the cosmic background radiation of the universe, etc., are not going to help you figure out where to put the elevators in Grand Central, nor will they tell you how often you should run an express train.
Another example for all you computer folks out there: ultimately, all software engineering is just moving electrons around. But imagine how hard your job would be if you could only talk about electrons moving around. No arrays, stacks, nodes, graphs, algorithms—just those lil negatively charged bois and their comings and goings. I don’t know a lot about computers, but I don’t think this would work. Psychology is similar to software; you can’t touch it, but it’s still doing stuff.
So yes, anything you posit at the level of psychology has to be possible at the level of neuroscience. And everything in neuroscience has to be possible at the level of biochemistry, etc., all the way down, and ultimately it’s all at the whims of God. But if you try to reduce any of those levels to be “just” the level below it, you lose all of its useful detail.
That’s why we won’t be talking about neurons or potassium ions or whatever. We’re gonna be talking about thermostats.
Here’s the meat of The Mind in the Wheel: the mind is made out of units called control systems.
I talked about control systems before in You Can’t Be Too Happy, Literally, but here’s a brief recap. The classic example of a control system is a thermostat. You set a target temperature, and if the thermostat detects a temperature that’s lower than the target, it turns on the heat. If the temperature is higher than the target, it turns on the A/C. The difference between the target temperature and the actual temperature is called the “error”, and it’s the thermostat’s job to minimize it. That’s it! That’s a control system.
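If it helps to see that in code, here’s the whole thing in a few lines of Python. (The sketch and its names are mine, not SMTM’s; it’s just the paragraph above made executable.)

```python
# A minimal thermostat: one control system in the cybernetic sense.

class Thermostat:
    def __init__(self, target):
        self.target = target  # the set point

    def error(self, actual):
        # error = distance between where the world is and where we want it
        return self.target - actual

    def act(self, actual):
        err = self.error(actual)
        if err > 0:
            return "heat"  # too cold: push the temperature up
        if err < 0:
            return "cool"  # too warm: push the temperature down
        return "idle"      # no error, nothing to do

thermostat = Thermostat(target=21.0)
print(thermostat.act(18.0))  # heat
print(thermostat.act(24.0))  # cool
```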
It seems likely that the mind contains lots of control systems because they are a really good way not to die. Humans are fragile, and we need to keep lots of different things at just the right level. We can’t be too warm or too cold. We need to eat food, but not too much. We need to be horny sometimes in order to reproduce, but if you’re horny all the time, you run into troubles of a different sort.
The science of control systems is called cybernetics, so let’s call this approach cybernetic psychology. It proposes that the mind is a stack of control systems, each responsible for monitoring one of these necessities. The units are the control systems themselves and their components, and the rules are the way those systems operate. Like a thermostat, they monitor some variable out in the world, compare it to the target level of that variable, and then act to reduce the difference between the two. For simplicity, we can refer to this error-reduction component as the “governor”. Unlike a simple thermostat, however, governors are both reactive and predictive—they try to reduce errors that have occurred, and they try to prevent those errors from occurring in the first place.
Every control system has a target level, and each one produces an error when it’s above or below that target. In cybernetic psychology, we call those errors “emotions”. So hunger is the error signal from the Nutrition Control System, pain is the error signal from the Body Damage Prevention System, and loneliness is the error signal from the Make Sure You Spend Time with Other People System.
(I’m making these names up; we don’t yet know how many systems there are, or what they control.)
Some of these emotions will probably correspond to words that people already use, but some won’t. For instance, “need to pee” will probably turn out to be an emotion, because it seems to be the error signal from some kind of Urine Control System. “Hunger” will probably turn out to be several emotions, each one driving us to consume a different macronutrient, or maybe a different texture or taste, who knows. If one of those drives is specifically for sugar, it would explain why people mysteriously have room for dessert after eating everything else: the Protein/Carbs/Fiber/etc. Control Systems are all satisfied, but the Sugar Control System is not.
The Sims actually did a reasonable job of identifying some of these drives: its needs bars track hunger, bladder, energy, social, fun, hygiene, and so on.
I worry that all of this is sounding too normal so far, so let’s get weirder.
In cybernetic psychology, “happiness” is not an emotion, because it’s not an error from a control system. Instead, happiness is the result of correcting errors. As SMTM puts it, “Happiness is what happens when a thirsty person drinks, when a tired person rests, when a frightened person reaches safety.” It’s kind of like getting $200 for passing “Go” in Monopoly.
I’ll return to this later because I’ve got a bone to pick with it, but for now I just want to point out that “emotion” means something different in cybernetics than it does in common parlance, and that’s on purpose, because repurposing words is a natural part of paradigm-shifting. (In Aristotelian physics, for instance, “motion” means something different. If your face turns red, it is undergoing “motion” in terms of color.) When you get too familiar with the words you’re using, you forget that each one is packed with assumptions—assumptions that might be wrong, but you’ll never know unless you bust ‘em open like a piñata.
Okay, if the mind is made out of these cybernetic systems and their governors, what we really want to know is: how many are there? How do they work?
We’re not doing impressionistic research here, so we can’t just create control systems by fiat, the way you can create “zest for life” by creating a Zest for Life Scale. Instead, discovering the drives requires a new set of methodologies. You might start by noticing that people seem inexplicably driven to do some things (like play Candy Crush) or inexplicably not driven to do other things (like drink lemon juice when they’re suffering from scurvy, even though it would save their life). This could give you an inkling of what kind of drives exist. Then you could try to isolate one of those drives through methods like:
Prevention: If you stop someone from playing Candy Crush, what do they do instead?
Knockout: If you turn off the elements of Candy Crush one at a time—make it black and white, eliminate the scoring system, etc.—at what point do they no longer want to play?
Behavioral exhaustion (knockout in reverse): If you give people one component of Candy Crush at a time—maybe categorizing things, earning points, seeing lots of colors, etc.—and let them do that as much as they want, do they still want to play Candy Crush afterward?
(See the methods sections for more).
With a few notable exceptions, you can pretty much only do one thing at a time, and each governor has a different opinion on what that thing should be. So unlike the thermostat in your house, which doesn’t have to contend with any other control systems, all of the governors of the mind have to fight with each other constantly. While we’re discovering the drives, then, we also have to figure out how the governors jockey for the right to choose behaviors.
For example, the Oxygen Governor can get a lot of votes really fast; no matter how cold, hungry, or lonely you are, you’ll always attend to your lack of air first. The Pain Governor can be overridden at low error levels (“my ankle kinda hurts but I have to finish this 5k”) but it gets a lot of sway at high error levels (“my ankle hurts so much I literally can’t walk on it”). Meanwhile, people can be really lonely for a long time without doing much about it, suggesting that the Loneliness Governor tops out at relatively few votes, or that it has a harder time figuring out what to vote for.
From the get-go, this raises a lot of questions. What are the governors governing—is the Loneliness Governor paying attention to something like eye contact or number of words spoken, or is it monitoring some kind of super abstract measure of socialization that we can’t even imagine yet? What happens when two governors are deadlocked? What happens when the vote is really close? Is there a mental equivalent of a runoff election? And how do these governors “learn” what things to vote for? No one knows yet, but we’d like to!
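Still, just to pin down what a “vote” might even mean, here’s a toy arbiter in Python. The vote curves are pure invention (the book doesn’t commit to any), but they show the shape of the idea: different governors convert errors into votes in different ways, and the high bidder picks the behavior.

```python
# Three made-up vote curves, matching the intuitions above.

def oxygen_votes(error):
    return error ** 3  # ramps up explosively: air always wins, fast

def pain_votes(error):
    return max(0, error - 2) * 10  # ignorable at low levels, dominant at high

def loneliness_votes(error):
    return min(error, 5)  # tops out: you can be very lonely for a very long time

bids = {
    "get air":     oxygen_votes(3),
    "rest ankle":  pain_votes(4),
    "call friend": loneliness_votes(9),
}
print(max(bids, key=bids.get))  # get air (27 votes beats 20 and 5)
```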
Here’s where cybernetics really pops off: if you’re on board so far, you’ve already got a theory of personality and psychopathology.
If the mind is made out of control systems, and those control systems have different set points (that is, their target levels) and sensitivities (that is, how hard they fight to maintain those levels), then “personality” is just how those set points and sensitivities differ from person to person. Someone who is more “extraverted”, for example, has a higher set point and/or greater sensitivity on their Sociality Control System (if such a thing exists). As in, they get an error if they don’t maintain a higher level of social interaction, or they respond to that error faster than other people do.
This is a major upgrade to how we think about personality. Right now, what is personality? If you corner a personality psychologist, they’ll tell you something like “traits and characteristics that are stable across time and situations”. Okay, but what’s a trait? What’s a characteristic? Push harder, and you’ll eventually discover that what we call “personality” is really “how you bubble things in on a personality test”. There are no units here, no rules, no theory about the underlying system and how it works. That’s why our best theory of personality performs about as well as the Enneagram, a theory that somebody just made up.
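Notice how little machinery the cybernetic story needs. Here are two people running the same (entirely hypothetical) Sociality Control System with different parameters; all the numbers are made up:

```python
# Personality as parameters: same system, different set point and sensitivity.

def urge_to_socialize(set_point, sensitivity, hours_social):
    error = set_point - hours_social      # how far below target you are
    return max(0.0, error * sensitivity)  # how hard the governor pushes

# Same day, same two hours of social contact:
print(urge_to_socialize(set_point=6, sensitivity=2.0, hours_social=2))  # 8.0, go find people
print(urge_to_socialize(set_point=2, sensitivity=0.5, hours_social=2))  # 0.0, happily at home
```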
But that’s not all—cybernetics also gives you a systematic way of thinking about mental illness. When you lay out all the parts of a control system, you’ll realize that there are lots of ways it can break down, and each malfunction causes a different kind of pathology.
For instance, if you snip that line labeled “error”, you knock out almost the entire control system. There’s no voting, no behavior, and no happiness generated. You just sit there feeling nothing, which certainly sounds like a kind of depression. On the other hand, if all of your errors get turned way up, then you get tons of voting, lots of behavior, and—if your behavior is successful—lots of happiness. That sounds like mania. (And so on.)
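In code terms, those two breakdowns are opposite settings of a single knob (a knob I’m inventing for illustration, not one from the book):

```python
# "error_gain" scales every raw error: 0 snips the error line entirely,
# large values crank every error way up.

def felt_error(raw_error, error_gain):
    return raw_error * error_gain

print(felt_error(4, 0))  # 0: no signal, no voting, no behavior, no happiness
print(felt_error(4, 5))  # 20: everything screams at once
```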
This is how units-and-rules thinking can get you farther than naive or impressionistic research. Right now, we describe mental disorders based on symptoms. Like: “You feel depressed because you feel sad.” We have no theory about the underlying system that causes the depression. Instead, we produce charts filled with abstractions, like this:
Imagine how hopeless we would be if we approached medicine this way, lumping together both Black Lung and the common cold as “coughing diseases”, even though the treatment for one of them is “bed rest and fluids” and the treatment for the other one is “get a different job”. This is, unfortunately, about the best we can do with a symptoms-based approach—maybe we’ll rearrange the chart as our statistical techniques get better, but we’ll never, ever cure depression. This is why we need a blueprint instead of a list: if we can trace a malfunction back to the part that broke, maybe we can fix it.
When we’re doing that, we’ll probably discover that what we think of as one thing is actually many things. “Depression”, for instance, may in fact be 17 different disorders. I mean, c’mon, one symptom of “depression” is sleeping too much, and another is sleeping too little. One symptom is weight gain; another is weight loss. Some people with depression feel extremely sad; others feel nothing. It’s crazy that we use one word to describe all of these syndromes, and it probably explains why we’re not very good at treating them.
I think there’s a lot of promise here, but now let me attack this idea a little bit, so you can see how disputes work differently when we’re working inside a paradigm.
To SMTM, happiness is not an emotion because it isn’t an error signal. Instead, it’s the thing you get for correcting an error signal. Eat a burrito when you’re hungry = happiness. Talk to a friend when you’re lonely = happiness. SMTM suspect that happiness operates like an overall guide for explore/exploit: when you’ve got a lot of happiness in the tank, keep doing the things you’re doing. When you’re low, change it up.
I think there’s something missing here. When you really gotta pee and you finally make it to a bathroom, that feels good. When you study all month for a big exam and then you get a 97%, that feels good too. But they feel like different kinds of good. The first kind is intense but fleeting; the second is more of a slow burn that could last for a whole day. I don’t see how you accomplish this with one common “pot” of happiness.
Or have you ever been underwater for a little too long? When you finally reach the surface, I guess it feels “good” to breathe again, but once you catch your breath, it’s not like you feel ecstatic. You feel more like, I dunno, what’s the emotion for “BREATHING IS VERY IMPORTANT, I WOULD ALWAYS LIKE TO BREATHE”?
I see two ways to solve this problem. One is to allow for several types of positive signals, so not all error correction gets dumped into “happiness”. Maybe there’s a separate bucket called “relief” that fills up when you correct dangerous errors like pain or suffocation. Unlike happiness, which is meant to encourage more of the same behaviors, relief might be a signal to do less of something.
Another solution is to allow for different governors to have different ratios between error correction and happiness generation. Right now we’re assuming that every unit of error that you correct becomes a unit of happiness gained. Let’s say that you’re really hungry, and your Hunger Governor is like “I GIVE THIS A -50!” and then you eat dinner and not only does your -50 go away, but you also get +50 happiness, that kind of feeling where you pat your belly and go “Now that’s good eatin’!”. (That’s my experience, anyway.) But maybe other governors work differently. If you feel like you’re drowning, your Oxygen Governor is like “I GIVE THIS A -1000!”. When you can breathe again, though, maybe you only get the -1000 to go away, and you don’t get any happiness on top of that. You feel much better than you did before, but you don’t feel good. You don’t pat your lungs and go, “Now that’s good breathin’!”
Ultimately, the way to test these ideas would be to build something. In this case, you’d start by building something like a Sim. If you program a lil computer dude with all of our conjectured control systems, does it act like a human does? Or does it keep half-drowning itself so it can get the pleasure of breathing again? Even better, if you build your Sim and it looks humanlike, can you then adjust the parameters to make it half-drown itself? After all, most people do not get their jollies from starving themselves of oxygen, but a few do3, so we ought to be able to explain even rare and pathological behavior by poking and prodding the underlying systems. I don’t think any of this would be easy, but unlike impressionistic research, it at least has a chance of being productive.
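Here’s what the embryo of such a Sim might look like, with the per-governor “exchange rate” from a moment ago built in. Every drive, number, and name is a placeholder, not a claim from the book:

```python
# A bare-bones Sim: each drive decays a little every tick, the governor with
# the biggest error wins the vote, and correcting an error pays out happiness
# at a per-governor exchange rate ("gain"). Oxygen's gain of 0.0 encodes the
# guess above: catching your breath removes the error but adds no joy.

drives = {
    # name:   [level, set_point, decay_per_tick, happiness_gain]
    "food":   [8.0, 10.0, 0.8, 1.0],
    "social": [9.0, 10.0, 0.3, 1.0],
    "oxygen": [10.0, 10.0, 0.1, 0.0],
}

happiness = 0.0
for tick in range(20):
    for d in drives.values():
        d[0] -= d[2]                          # the world erodes every drive
    errors = {name: d[1] - d[0] for name, d in drives.items()}
    winner = max(errors, key=errors.get)      # biggest error wins the vote
    happiness += errors[winner] * drives[winner][3]
    drives[winner][0] = drives[winner][1]     # behavior restores the drive

print(f"happiness after 20 ticks: {happiness:.1f}")
```

With oxygen’s gain pinned at zero, this little guy never half-drowns itself for fun; crank that gain above zero and it might start to. Twiddling knobs like that, and checking the resulting behavior against actual humans, is what testing the paradigm would look like in practice.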
So far, I’ve been talking like this cybernetics thing is mostly right. To be clear, I expect this to be mostly wrong. This might end up being a totally boneheaded way to think about psychology. That’s fine! The point of a paradigm is to be wrong in the right direction.
The philosopher of science Karl Popper famously said that real science is falsifiable. I think he didn’t go far enough. Real science is overturnable. That is, before something is worth refuting, it has to be worth believing. “I have two heads” is falsifiable, but you’d be wasting your time falsifying it. First we need to stake out some clear theoretical commitments, some strong models that at least some of us are willing to believe, and only then can we go to town trying to falsify them. Cunningham’s Law states, “The best way to get the right answer on the Internet is not to ask a question; it’s to post the wrong answer.” Science works the same way, and the bolder we can make our wrong answers, the better our right answers will be.
This has certainly been true in our history. The last time psychology made a great leap forward was when behaviorism went bust. Say what you will about John Watson and B.F. Skinner, but at least they believed in something. Their ideas were so strong and so specific, in fact, that a whole generation of scientists launched their careers by proving those ideas wrong.4 This is what it looks like to be wrong in the right direction: when your paradigm eventually falls, it sinks to the bottom of the ocean like a dead whale and a whole ecosystem grows up around its carcass.
When we killed behaviorism, though, we did not replace it with a set of equally overturnable beliefs. Instead, we kinda decided that anything goes. If you want to study whether people remember circles better than squares, or whether taller people are also more aggressive, or whether babies can tell time, that’s all fine, as long as you can put up some significant p-values.5 The result has been a decades-long buildup of findings, and each one has gone into its own cubbyhole. We sometimes call these things “theories” or “models”, but when you look closely, you don’t see a description of a system, but a way of squishing some findings together. Like this:
This is what impressionistic research looks like. You can shoehorn pretty much anything into a picture like that, and then you can argue for the rest of your career with people who prefer a different picture. Or, more often, everyone can make their own pictures, add ‘em to the pile, and then we all move on. Nothing gets overturned, only forgotten.
When you work with units and rules, it looks more like this:
It’s not that we should use boxes and lines instead of rainbows. It’s that the boxes and lines should mean something. This diagram claims that the “primary rate gyro package”, whatever that might be, is a critical component of the system. Without it, the “attitude control electronics” wouldn’t know what to do. If you remove any of those boxes and the system still works, you know you got the diagram wrong. (Of course, if you zoom into any of those blocks, you’ll find that each of them contains its own world of units; it’s units all the way down.) This is very different from the kinds of boxes and lines we produce right now, which contain a mishmash of statistics and abstractions:
When you’re doing impressionistic research like that, you can accommodate anything. That’s what I often find when I talk to psychologists about units and rules—they’ll light up and go, “Oh yes! I already do that!” And then they’ll describe something that is definitely not that. Like, “I study attention, and I find that people pay more attention when you make the room cold!” But...what is attention? Where does it go on the blueprint? The fact that we have a noun or a verb for something does not mean that it’s a unit or a rule. Until you know what board game you’re playing, you’re stuck describing things in terms of behaviors, symptoms, and abstractions.6
So, look. I do suspect that key pieces of the mind run on control systems. I also suspect that much of the mind has nothing to do with control systems at all. Language, memory, sensation—these processes might interface with control systems, but they themselves may not be cybernetic. In fact, cybernetic and non-cybernetic may turn out to be an important distinction in psychology. It would certainly make a lot more sense than dividing things into cognitive, social, developmental, clinical, etc., the way we do right now. Those divisions are given by the dean, not by nature.
But really, I like cybernetic psychology because it stands a chance of becoming overturnable. And I’d love to see it overturned! We’d learn a lot in the process, the same way overturning a rock in the woods reveals a whole new world of grubs ‘n’ worms ‘n’ things. I’d love to see other overturnable approaches, too, other paradigms that propose different universes of units and rules. If you hate control systems, that’s fine, what else you got? I like how cybernetics has unexpected implications for learning, animal welfare, and artificial intelligence, that’s fun for me, that tickles the underside of my brain, so if your paradigm also connects things that otherwise seem to have nothing to do with each other, please, tickle away!7
In a healthy scientific ecosystem, this kind of thing would be happening all the time. We’d have lots of little eddies and enclaves of people doing speculative work, and they’d grow in proportion to their success at explaining the universe. Alas, that’s not the world we have, but it’s the one we ought to build. If only we had more Zest for Life!
In his defense, Kuhn didn’t expect his book to blow up like it did, and so what he published was basically a rough draft of the idea. He spent the rest of his career trying to counter people’s misperceptions and critiques, but this didn’t really clear anything up for reasons that will be understandable to anyone who has gone several rounds with a peer reviewer.
It’s worth noting that two of these findings (the “networking makes you feel dirty” effect and the “eyes in the break room make you steal less milk” effect) have been the subject of several failed replications. The networking study was done by a researcher credibly suspected of fraud, but even if there wasn’t foul play, we should expect the results of naive research to be flimsy. If you have no idea how the underlying system works, then you have no idea why your effect occurred or how to get it again. The methods section of your paper is supposed to include all of the details necessary to replicate your effect, but this is of course a joke, because in psychology nobody knows which details are necessary to replicate their effects.
Apparently some folks even like to hold their pee for a long time so they can achieve a pee-gasm, so we should be able to model this as well. (I promise that link is as safe for work as it could be.)
Noam Chomsky, for instance, got famous for pointing out that behaviorism could not explain how kids acquire language. William Powers, the guy who first tried to apply cybernetics to psychology in the 1970s, was still beating up on behaviorism, decades after it had been dethroned. (Powers’ ideas were hot for a second and then went dormant for 50 years and no one knows why.)
Note that judgment and decision making, Prospect Theory, heuristics and biases, etc.—which are perhaps psychology’s greatest success since the fall of behaviorism—are themselves an overturning of expected utility theory.
I’ve now run into a few psychologists who are certain that their corner of the field has this nailed down, but whenever they lay out their theory, this is always the thing missing—there’s nothing left to fill in, nothing that unifies things we would intuitively see as separate, or separates things we would intuitively see as unified. But look, I err on the side of being too cynical. If you’ve got this figured out, great! Please do the rest of psychology next.
This is the most common failure mode for The Mind in the Wheel, but there are two others I’ve encountered. Some people think it’s too old: they’ll go, “Oh, this is just...” and then they’ll name some old-timey research that kinda sorta bears a resemblance and assume that settles things. Or they’ll think it’s too new: “Things are mostly fine right now, so why listen to these internet weirdos?” I find this usually breaks down by age—old people want to dismiss it, young people want to understand it. And hey, maybe the old timers will ultimately be proven right, but they’re definitely wrong to feel so confident about it, because no one knows how this will pan out. I always find it surprising when I meet someone whose job is to make knowledge and yet they seem really really invested in not thinking anything different from whatever they think right now.